
Haijin Liang

VDE Bench: Evaluating The Capability of Image Editing Models to Modify Visual Documents

Jan 27, 2026

Rank4Gen: RAG-Preference-Aligned Document Set Selection and Ranking

Jan 16, 2026

Best Practices for Distilling Large Language Models into BERT for Web Search Ranking

Nov 07, 2024

Type-enriched Hierarchical Contrastive Strategy for Fine-Grained Entity Typing

Aug 22, 2022

ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding

Aug 05, 2022

Bridging the Gap Between Training and Inference of Bayesian Controllable Language Models

Jun 11, 2022